
    Local Testing for Membership in Lattices

    Motivated by the structural analogies between point lattices and linear error-correcting codes, and by the mature theory on locally testable codes, we initiate a systematic study of local testing for membership in lattices. Testing membership in lattices is also motivated in practice by applications to integer programming, error detection in lattice-based communication, and cryptography. Apart from establishing the conceptual foundations of lattice testing, our results include the following: 1. We demonstrate upper and lower bounds on the query complexity of local testing for the well-known family of code formula lattices. Furthermore, we instantiate our results with code formula lattices constructed from Reed-Muller codes and obtain nearly tight bounds. 2. We show that in order to achieve low query complexity, it is sufficient to design one-sided non-adaptive canonical tests. This result is akin to, and based on, an analogous result for error-correcting codes due to Ben-Sasson et al. (SIAM J. Computing 35(1), pp. 1–21).
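    To make the setting concrete, the following toy sketch (our illustration, not the paper's tester) shows what a one-sided, non-adaptive local membership test can look like for the simplest instance of the code formula family, the Construction-A lattice L = C + 2Z^n obtained from a binary linear code C: a random parity check of C is verified on x mod 2, so the number of queries equals the weight of the chosen check.

```python
import numpy as np

# Toy sketch (not the paper's tester): local membership testing for the
# Construction-A lattice  L = C + 2Z^n,  where C is a binary linear code
# given by a parity-check matrix H.  A vector x lies in L iff H (x mod 2) = 0
# over F_2, so checking a single row of H only needs the coordinates in that
# row's support -- the number of queries is the row weight.

def local_test(query, H, rng):
    """One-sided, non-adaptive test: pick a random parity check of C and
    verify it on x mod 2, reading only the coordinates it involves."""
    row = H[rng.integers(H.shape[0])]
    support = np.flatnonzero(row)              # coordinates the test reads
    bits = [query(i) % 2 for i in support]
    return sum(bits) % 2 == 0                  # accept iff the check holds

# Example: C = length-3 repetition code with parity-check matrix H.
H = np.array([[1, 1, 0], [0, 1, 1]])
rng = np.random.default_rng(0)
x = np.array([3, 1, -1])                       # x mod 2 = (1, 1, 1), a codeword of C
print(local_test(lambda i: x[i], H, rng))      # True: x is in C + 2Z^3
```

    Lattice vectors always pass (one-sidedness); how far a point can be from the lattice while still passing most such checks is exactly the soundness question the query-complexity bounds above address.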

    Phase Transition for Infinite Systems of Spiking Neurons

    We prove the existence of a phase transition for a stochastic model of interacting neurons. The spiking activity of each neuron is represented by a point process having rate 1 whenever its membrane potential is larger than a threshold value. This membrane potential evolves in time and integrates the spikes of all presynaptic neurons since the last spiking time of the neuron. When a neuron spikes, its membrane potential is reset to 0 and, simultaneously, a constant value is added to the membrane potentials of its postsynaptic neurons. Moreover, each neuron is exposed to a leakage effect, leading to an abrupt loss of potential occurring at random times driven by an independent Poisson point process of rate γ > 0. For this process we prove the existence of a value γc such that the system has one or two extremal invariant measures according to whether γ > γc or not.
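    The finite-size simulation sketch below is only meant to illustrate the dynamics described above; the neuron count, the all-to-all connectivity, the threshold, the coupling constant, and the reading of "leakage" as a reset to 0 are assumptions made for the example, and the infinite-volume construction behind the phase transition is not reproduced.

```python
import numpy as np

# Illustrative finite-size simulation of the interacting neuron dynamics
# described above (assumptions: N neurons on a complete graph, threshold
# theta = 1, coupling a = 1, leakage interpreted as a reset of the potential
# to 0).  Gillespie-style event simulation.

def simulate(N=100, gamma=0.5, T=50.0, theta=1, a=1, seed=0):
    rng = np.random.default_rng(seed)
    U = np.ones(N, dtype=np.int64)               # membrane potentials
    t, spikes = 0.0, 0
    while t < T:
        active = np.flatnonzero(U >= theta)      # neurons spiking at rate 1
        rate = len(active) + gamma * N           # total event rate (spikes + leaks)
        t += rng.exponential(1.0 / rate)
        if rng.random() < len(active) / rate:    # a spike occurs
            i = rng.choice(active)
            U[i] = 0                             # reset the spiking neuron
            U[np.arange(N) != i] += a            # excite its postsynaptic neurons
            spikes += 1
        else:                                    # a leak occurs at a uniform neuron
            U[rng.integers(N)] = 0
    return spikes / (N * T)                      # empirical per-neuron spiking rate

for g in (0.2, 2.0):                             # weak vs. strong leakage
    print(f"gamma={g}: rate ~ {simulate(gamma=g):.3f}")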

    Efficient and error-correcting data structures for membership and polynomial evaluation


    AC0(MOD2) lower bounds for the Boolean inner product

    AC0 ◦ MOD2 circuits are AC0 circuits augmented with a layer of parity gates just above the input layer. We study AC0 ◦ MOD2 circuit lower bounds for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC0 ◦ MOD2 circuits of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an Ω̃(n^2) lower bound for the special case of depth-4 AC0 ◦ MOD2. Our proof of the depth-4 lower bound employs a new “moment-matching” inequality for bounded, nonnegative, integer-valued random variables that may be of independent interest: we prove an optimal bound on the maximum difference between two discrete distributions’ values at 0, given that their first d moments match.
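    As a small, self-contained illustration (not taken from the paper), the snippet below spells out the Boolean Inner Product and checks by brute force that even IP_1(x, y) = x AND y is not a parity of its inputs, which is why the MOD2 layer alone cannot compute IP and the AC0 circuitry on top is doing real work.

```python
from itertools import product

# The Boolean Inner Product on 2n bits: IP_n(x, y) = x_1 y_1 XOR ... XOR x_n y_n.
# Illustrative sanity check: already for n = 1 (IP_1 = AND), no parity of the
# input bits (negated or not) matches the truth table, since IP has degree 2
# over F_2 while a MOD2 gate computes a degree-1 function.

def ip(x, y):
    return sum(a & b for a, b in zip(x, y)) % 2

points = list(product((0, 1), repeat=2))          # all (x, y) with n = 1
truth = [ip((x,), (y,)) for x, y in points]       # AND truth table: 0, 0, 0, 1

def parity(subset, negate):                       # XOR of chosen inputs, optionally negated
    return [(sum(v[i] for i in subset) + negate) % 2 for v in points]

all_parities = [parity(s, neg) for s in ([], [0], [1], [0, 1]) for neg in (0, 1)]
print(truth in all_parities)                      # False: AND is not a parity
```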

    The non-conventional use of 99mTc-Tetrofosmine for dynamic hepatobiliary scintigraphy

    BACKGROUND: Classic dynamic hepatobiliary scintigraphy (DHBS) is commonly performed with 99mTc-Iminodiacetic Acid (IDA) derivatives and represents a non-invasive diagnostic method for biliary dyskinesia, fistulas, surgical anastomoses, etc. (1). This study assesses the possibility of performing DHBS with 99mTc-Tetrofosmine (TF), a radiopharmaceutical (RF) dedicated to myocardial perfusion scintigraphy (MPS) but also excreted through the liver. The possibility of using 99mTc-TF for DHBS may be important in situations when the standardized RF for this procedure (IDA derivatives) is not available. MATERIAL AND METHODS: We performed DHBS for 30 patients referred for investigation by internal medicine and surgery departments. The patients had been fasting for 12 hours. The dynamic investigation started simultaneously with the intravenous (IV) administration of 37–110 MBq (1–3 mCi) of 99mTc-TF. Dynamic images were recorded for 30–45 minutes, one image per minute, followed by static scintigraphy at 1 h, 1.5 h, 2 h, and 3 h after IV injection. RESULTS: The quality of the scintigraphic images of the liver and biliary tree obtained at DHBS with 99mTc-TF ensured the correct diagnosis of biliary dyskinesia, stasis, stenosis, and fistulas. CONCLUSIONS: DHBS using 99mTc-TF is justified by the image quality and by the good cost/benefit ratio. Because IDA derivatives are not always available, this finding may be important for medical practice. 99mTc-TF evacuated through the bile duct allows DHBS interpretation, while the necessary dose is approximately 8 to 20 times smaller than that used for myocardial perfusion scintigraphy. Nuclear Med Rev 2011; 14, 2: 79–8

    Lattice-based locality sensitive hashing is optimal

    Locality sensitive hashing (LSH) was introduced by Indyk and Motwani (STOC ‘98) to give the first sublinear time algorithm for the c-approximate nearest neighbor (ANN) problem using only polynomial space. At a high level, an LSH family hashes “nearby” points to the same bucket and “far away” points to different buckets. The quality measure of an LSH family is its LSH exponent, which helps determine both query time and space usage. In a seminal work, Andoni and Indyk (FOCS ‘06) constructed an LSH family based on random ball partitionings of space that achieves an LSH exponent of 1/c^2 for the ℓ2 norm, which was later shown to be optimal by Motwani, Naor and Panigrahy (SIDMA ‘07) and O’Donnell, Wu and Zhou (TOCT ‘14). Although optimal in the LSH exponent, the ball partitioning approach is computationally expensive. So, in the same work, Andoni and Indyk proposed a simpler and more practical hashing scheme based on Euclidean lattices and provided computational results using the 24-dimensional Leech lattice. However, no theoretical analysis of the scheme was given, thus leaving open the question of finding the exponent of lattice-based LSH. In this work, we resolve this question by showing the existence of lattices achieving the optimal LSH exponent of 1/c^2 using techniques from the geometry of numbers. At a more conceptual level, our results show that optimal LSH space partitions can have periodic structure. Understanding the extent to which additional structure can be imposed on these partitions, e.g., to yield low space and query complexity, remains an important open problem.
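    A minimal sketch of the lattice-hashing idea discussed above follows; it uses the integer lattice Z^n with coordinate-wise rounding as a stand-in decoder, which is easy to state but does not achieve the optimal exponent, whereas the paper's point is that suitable lattices (the Leech lattice being the practical example) do. The random rotation, random shift, and scale parameter w are illustrative choices.

```python
import numpy as np

# Illustrative sketch of lattice-based LSH (not the paper's construction):
# hash a point by applying a random rotation and shift, scaling by 1/w, and
# decoding to the nearest lattice point.  Here the lattice is Z^n, whose
# nearest-point decoder is coordinate-wise rounding; it stands in for a
# lattice with a good decoder such as the 24-dimensional Leech lattice.

class LatticeHash:
    def __init__(self, dim, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))  # random rotation
        self.rot, self.shift, self.w = q, rng.uniform(0, w, dim), w

    def __call__(self, x):
        y = (self.rot @ x + self.shift) / self.w
        return tuple(np.rint(y).astype(int))     # nearest point of Z^n = the bucket

h = LatticeHash(dim=24, seed=1)
rng = np.random.default_rng(2)
p = rng.standard_normal(24)
print(h(p) == h(p + 0.01 * rng.standard_normal(24)))  # nearby points usually collide
```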

    Augmenting graphs to minimize the diameter

    We study the problem of augmenting a weighted graph by inserting edges of bounded total cost while minimizing the diameter of the augmented graph. Our main result is an FPT 4-approximation algorithm for the problem.
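    For illustration only, here is a naive greedy baseline for the augmentation problem (repeatedly add the unit-cost shortcut that most reduces the weighted diameter until the budget runs out); it is not the FPT 4-approximation algorithm of the paper and carries no approximation guarantee.

```python
import itertools
import networkx as nx

# Greedy baseline, purely to illustrate the problem setting -- NOT the FPT
# 4-approximation from the paper.  Repeatedly insert the unit-cost non-edge
# whose addition reduces the weighted diameter the most, until the budget is spent.

def diameter(G):
    lengths = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    return max(max(d.values()) for d in lengths.values())

def greedy_augment(G, budget, shortcut_weight=1.0):
    G = G.copy()
    while budget >= 1:
        current, best = diameter(G), None
        for u, v in itertools.combinations(G.nodes, 2):
            if G.has_edge(u, v):
                continue
            G.add_edge(u, v, weight=shortcut_weight)   # try this shortcut
            d = diameter(G)
            G.remove_edge(u, v)
            if d < current and (best is None or d < best[0]):
                best = (d, u, v)
        if best is None:                               # no shortcut helps any more
            break
        G.add_edge(best[1], best[2], weight=shortcut_weight)
        budget -= 1
    return G

P = nx.path_graph(8)                                   # path 0-1-...-7, diameter 7
nx.set_edge_attributes(P, 1.0, "weight")
print(diameter(P), diameter(greedy_augment(P, budget=2)))
```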

    Recognition of architectural and electrical symbols by COSFIRE filters with inhibition

    The automatic recognition of symbols can be used to automatically convert scanned drawings into digital representations compatible with computer-aided design software. We propose a novel approach to automatically recognize architectural and electrical symbols. The proposed method extends the existing trainable COSFIRE approach by adding an inhibition mechanism that is inspired by shape-selective TEO neurons in visual cortex. A COSFIRE filter with inhibition takes as input excitatory and inhibitory responses from line and edge detectors. The type (excitatory or inhibitory) and the spatial arrangement of the low-level features are determined in an automatic configuration step that analyzes two types of prototype patterns, called positive and negative. Excitatory features are extracted from a positive pattern and inhibitory features are extracted from one or more negative patterns. In our experiments we use four subsets of images with different noise levels from the Graphics Recognition data set (GREC 2011) and demonstrate that the inhibition mechanism that we introduce substantially improves the effectiveness of recognition.
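    A heavily simplified numpy sketch of the response-combination step is given below; the tuple format, the geometric-mean combination, and the subtractive inhibition are our illustrative assumptions, and the automatic configuration from positive and negative prototypes, as well as the blurring and shifting of the feature maps, are omitted.

```python
import numpy as np

# Minimal sketch of the response-combination idea behind COSFIRE filters with
# inhibition (heavily simplified).  Each tuple says: at offset (dx, dy) from
# the filter centre, take the response of line/edge detector `channel`, with
# a sign marking it as excitatory (+1) or inhibitory (-1).

def cosfire_response(feature_maps, tuples, x, y, inhibition_factor=1.0):
    """feature_maps: dict channel -> 2D array of detector responses."""
    excit, inhib = [], []
    for channel, dx, dy, sign in tuples:
        r = feature_maps[channel][y + dy, x + dx]
        (excit if sign > 0 else inhib).append(r)
    e = np.prod(excit) ** (1.0 / len(excit)) if excit else 0.0   # geometric mean
    i = max(inhib) if inhib else 0.0                              # strongest inhibitor
    return max(e - inhibition_factor * i, 0.0)

# Toy example: two excitatory vertical-line responses flanking the centre,
# one inhibitory horizontal-line response at the centre.
maps = {"vert": np.zeros((9, 9)), "horiz": np.zeros((9, 9))}
maps["vert"][4, 2] = maps["vert"][4, 6] = 1.0
tuples = [("vert", -2, 0, +1), ("vert", +2, 0, +1), ("horiz", 0, 0, -1)]
print(cosfire_response(maps, tuples, x=4, y=4))   # 1.0: excited, not inhibited
maps["horiz"][4, 4] = 1.0
print(cosfire_response(maps, tuples, x=4, y=4))   # 0.0: inhibition suppresses it
```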

    Detection of curved lines with B-COSFIRE filters: A case study on crack delineation

    The detection of curvilinear structures is an important step for various computer vision applications, ranging from medical image analysis for the segmentation of blood vessels, to remote sensing for the identification of roads and rivers, and to biometrics and robotics, among others. This is a nontrivial task, especially for the detection of thin or incomplete curvilinear structures surrounded by noise. We propose a general-purpose curvilinear structure detector that uses the brain-inspired trainable B-COSFIRE filters. It consists of four main steps, namely nonlinear filtering with B-COSFIRE, thinning with non-maximum suppression, hysteresis thresholding, and morphological closing. We demonstrate its effectiveness on a data set of noisy images of cracked pavements, where we achieve state-of-the-art results (F-measure = 0.865). The proposed method can be employed in any computer vision methodology that requires the delineation of curvilinear and elongated structures.
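    The pipeline can be sketched with off-the-shelf components as below; since B-COSFIRE filters and the exact non-maximum suppression are not part of scikit-image, the sketch substitutes a generic ridge filter (Frangi) and morphological skeletonization, and it applies thinning after hysteresis thresholding because skeletonization needs a binary input. Thresholds and the file name are illustrative.

```python
import numpy as np
from skimage import filters, morphology

# Sketch of the four-stage delineation pipeline described above, with
# stand-ins where the original components are not available off the shelf.

def delineate_cracks(image, low=0.05, high=0.15):
    # 1. nonlinear filtering that enhances thin, elongated dark structures
    #    (stand-in for the trainable B-COSFIRE filter)
    ridge = filters.frangi(image, black_ridges=True)
    # 2. hysteresis thresholding of the filter response
    binary = filters.apply_hysteresis_threshold(ridge, low, high)
    # 3. thinning to one-pixel-wide curves (stand-in for non-maximum suppression)
    thin = morphology.skeletonize(binary)
    # 4. morphological closing to bridge small gaps along the delineated cracks
    return morphology.binary_closing(thin, morphology.disk(2))

# Usage (assuming a grayscale pavement image with values in [0, 1]):
# from skimage import io, img_as_float
# cracks = delineate_cracks(img_as_float(io.imread("pavement.png", as_gray=True)))
```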